
    Bayesian Probabilistic Power Flow Analysis Using Jacobian Approximate Bayesian Computation

    A probabilistic power flow (PPF) study is an essential tool for the analysis and planning of a power system when specific variables are considered as random variables with particular probability distributions. The most widely used method for solving the PPF problem is Monte Carlo simulation (MCS). Although MCS is accurate for obtaining the uncertainty of the state variables, it is also computationally expensive, since it relies on repetitive deterministic power flow solutions. In this paper, we introduce a different perspective on the PPF problem. We frame the PPF as a probabilistic inference problem and, instead of repeatedly solving optimization problems, we use Bayesian inference to compute posterior distributions over the state variables. Additionally, we provide a likelihood-free method, based on the Approximate Bayesian Computation philosophy, that incorporates the Jacobian computed from the power flow equations. Results in three different test systems show that the proposed methodologies are competitive alternatives for solving the PPF problem and, in some cases, allow for a reduction in computation time compared to MCS.
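
    As a rough illustration of the likelihood-free idea described above (not the authors' exact algorithm, which additionally exploits the Jacobian), a minimal rejection-ABC loop for a power flow setting could look like the sketch below; the callables and the tolerance eps are hypothetical placeholders.

```python
import numpy as np

# Minimal rejection-ABC sketch in a power flow setting (illustrative only, not
# the paper's exact algorithm). `sample_injections`, `sample_state` and
# `power_flow_equations` are hypothetical callables; the last one maps candidate
# state variables (voltages, angles) to the power injections they imply.

def rejection_abc_ppf(sample_injections, sample_state, power_flow_equations,
                      eps, n_samples=10_000):
    accepted = []
    for _ in range(n_samples):
        s = sample_injections()              # random injections from their distributions
        x = sample_state()                   # candidate state from a proposal/prior
        mismatch = power_flow_equations(x) - s
        if np.linalg.norm(mismatch) < eps:   # keep states consistent with the draw
            accepted.append(x)
    return np.array(accepted)                # empirical approximation of the posterior
```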

    Efficient modeling of latent information in supervised learning using Gaussian processes

    Often in machine learning, data are collected as a combination of multiple conditions, e.g., the voice recordings of multiple persons, each labeled with an ID. How could we build a model that captures the latent information related to these conditions and generalizes to a new one with only a few data points? We present a new model called Latent Variable Multiple Output Gaussian Processes (LVMOGP) that jointly models multiple conditions for regression and generalizes to a new condition with a few data points at test time. LVMOGP infers the posteriors of Gaussian processes together with a latent space representing the information about the different conditions. We derive an efficient variational inference method for LVMOGP whose computational complexity is as low as that of sparse Gaussian processes. We show that LVMOGP significantly outperforms related Gaussian process methods on various tasks with both synthetic and real data.
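
    A minimal Python sketch of the underlying modelling idea, assuming a product kernel between inputs and per-condition latent vectors, is given below; it is not the LVMOGP implementation, and all names, shapes and values are illustrative.

```python
import numpy as np

# Sketch of the core modelling idea behind an LVMOGP-style model (not the
# paper's implementation): each condition c gets a latent vector h_c, and the
# GP covariance over (input, condition) pairs factorises as a product of an
# input kernel and a kernel on the latent space. Lengthscales and the random
# latent values below are illustrative assumptions.

def rbf(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def joint_kernel(X, cond_ids, H, ls_x=1.0, ls_h=1.0):
    """k((x, c), (x', c')) = k_x(x, x') * k_h(h_c, h_c')."""
    Hx = H[cond_ids]                      # latent embedding of each point's condition
    return rbf(X, X, ls_x) * rbf(Hx, Hx, ls_h)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 1))              # inputs
cond_ids = rng.integers(0, 3, size=30)    # condition label for each point
H = rng.normal(size=(3, 2))               # latent variables (learned in the real model)
K = joint_kernel(X, cond_ids, H)          # 30 x 30 covariance matrix
```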

    Physically-inspired Gaussian process models for post-transcriptional regulation in Drosophila

    The regulatory processes of Drosophila are thoroughly studied to understand a great variety of biological principles. While pattern-forming gene networks are analysed at the transcription step, post-transcriptional events (e.g. translation, protein processing) play an important role in establishing protein expression patterns and levels. Since the post-transcriptional regulation of Drosophila depends on spatiotemporal interactions between mRNAs and gap proteins, proper physically-inspired stochastic models are required to study the link between the two quantities. Previous research has shown that combining Gaussian processes (GPs) and differential equations leads to promising predictions when analysing regulatory networks. Here we further investigate two types of physically-inspired GP models based on a reaction-diffusion equation, where the main difference lies in where the prior is placed. While one of them has been studied previously using protein data only, the other is novel and yields a simple approach requiring only the differentiation of kernel functions. In contrast to other stochastic frameworks, discretising space is not required here. Both GP models are tested under different conditions depending on the availability of gap gene mRNA expression data. Finally, their performance is assessed on a high-resolution dataset describing the blastoderm stage of the early embryo of Drosophila melanogaster.
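
    The following Python sketch illustrates, under simplifying assumptions, the "differentiation of kernel functions" ingredient for a squared-exponential kernel in time; it omits the reaction-diffusion operator and is not the authors' model.

```python
import numpy as np

# Illustration of the "differentiate the kernel" idea: when a GP prior is
# placed on one quantity, covariances involving a linear differential operator
# applied to it follow by differentiating the kernel. Only the time-derivative
# building block of a squared-exponential kernel is shown; the full
# reaction-diffusion operator and all parameter values are assumptions.

def se_kernel(t, tp, var=1.0, ls=1.0):
    return var * np.exp(-0.5 * (t - tp) ** 2 / ls ** 2)

def dse_dt(t, tp, var=1.0, ls=1.0):
    """Derivative of the kernel w.r.t. its first argument,
    i.e. the building block for Cov[du/dt(t), u(t')]."""
    return -(t - tp) / ls ** 2 * se_kernel(t, tp, var, ls)

t = np.linspace(0.0, 1.0, 5)
K = se_kernel(t[:, None], t[None, :])     # Cov[u(t), u(t')]
dK = dse_dt(t[:, None], t[None, :])       # Cov[du/dt(t), u(t')]
```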

    Tensor decomposition processes for interpolation of diffusion magnetic resonance imaging

    Diffusion magnetic resonance imaging (dMRI) is an established medical technique used for describing water diffusion in an organic tissue. Typically, rank-2 or 2nd-order tensors quantify this diffusion. From this quantification, it is possible to calculate relevant scalar measures (e.g. fractional anisotropy) employed in the clinical diagnosis of neurological diseases. Nonetheless, 2nd-order tensors fail to represent complex tissue structures like crossing fibers. To overcome this limitation, several researchers have proposed a diffusion representation with higher order tensors (HOT), specifically of 4th and 6th order. However, current acquisition protocols for dMRI data yield images with a spatial resolution between 1 mm³ and 2 mm³, and this voxel size is much larger than the tissue structures of interest. Therefore, several clinical procedures derived from dMRI may be inaccurate. For this reason, interpolation has been used to enhance the resolution of dMRI in a tensorial space. Most interpolation methods are valid only for rank-2 tensors, and a generalization for HOT data is missing. In this work, we propose a probabilistic framework for performing HOT data interpolation. In particular, we introduce two novel probabilistic models based on the Tucker and the canonical decompositions. We call our approaches the Tucker decomposition process (TDP) and the canonical decomposition process (CDP). We test the TDP and CDP on rank-2, 4 and 6 HOT fields. For rank-2 tensors, we compare against direct interpolation, the log-Euclidean approach, and Generalized Wishart processes. For rank-4 and 6 tensors, we compare against direct interpolation and raw dMRI interpolation. The results show that TDP and CDP accurately interpolate the HOT fields in terms of Frobenius distance, anisotropy measurements, and fiber tracts. Moreover, CDP and TDP can be generalized to any rank, and the proposed framework keeps the mandatory constraint of positive-definite tensors while preserving morphological properties such as fractional anisotropy (FA), generalized anisotropy (GA) and tractography.
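
    As a hedged illustration of the canonical-decomposition building block, the Python sketch below reconstructs a 4th-order tensor from assumed CP factor matrices; it is not the CDP/TDP inference procedure itself.

```python
import numpy as np

# Sketch of the canonical (CP/PARAFAC) building block behind a CDP-style model:
# a 4th-order tensor is written as a sum of R rank-one outer products of the
# columns of four factor matrices. The rank R and the random factors are
# illustrative assumptions, not the paper's parameterisation.

def cp_reconstruct(factors):
    """Rebuild a 4th-order tensor from CP factors A, B, C, D of shape (n_i, R)."""
    A, B, C, D = factors
    return np.einsum('ir,jr,kr,lr->ijkl', A, B, C, D)

R = 3
rng = np.random.default_rng(0)
factors = [rng.normal(size=(3, R)) for _ in range(4)]
T = cp_reconstruct(factors)               # a 3 x 3 x 3 x 3 higher-order tensor
```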

    Comparison of methods for the identification and sub-typing of O157 and non-O157 Escherichia coli serotypes and their integration into a polyphasic taxonomy approach

    Phenotypic, chemotaxonomic and genotypic data from 12 strains of Escherichia coli were collected, including carbon source utilisation profiles, ribotypes, sequencing data of the 16S–23S rRNA internal transcribed spacer (ITS) region and Fourier transform-infrared (FT-IR) spectroscopic profiles. The objectives were to compare several identification systems for E. coli and to develop and test a polyphasic taxonomic approach, combining the four methodologies, for the sub-typing of O157 and non-O157 E. coli. The nucleotide sequences of the 16S–23S rRNA ITS regions were amplified by polymerase chain reaction (PCR), sequenced and compared with reference data available in the GenBank database using the Basic Local Alignment Search Tool (BLAST). Additional information comprising the utilisation of carbon sources, riboprint profiles and FT-IR spectra was also collected. The capacity of the methods to identify and type E. coli to species and subspecies level was evaluated. Data were transformed and integrated to produce polyphasic hierarchical clusters and relationships. The study reports the use of an integrated scheme comprising phenotypic, chemotaxonomic and genotypic information (carbon source profile, sequencing of the 16S–23S rRNA ITS, ribotyping and FT-IR spectroscopy) for a more precise characterisation and identification of E. coli. The results showed that identification of E. coli strains by each individual method was limited mainly by the extent and quality of the reference databases. In contrast, the polyphasic approach, whereby heterogeneous taxonomic data were combined and weighted, improved the identification results, gave more consistency to the final clustering and provided additional information on the taxonomic structure and phenotypic behaviour of the strains, as shown by the close clustering of strains with similar stress resistance patterns. The authors acknowledge the financial contribution of the Spanish INIA, the Research Council of Norway (project 178230/I10), the Foundation for Levy on Foods, the Norwegian Research Fees Fund for Agricultural Goods, the Norwegian Independent Meat and Poultry Association, Nortura SA and NHO Mat og Landbruk.
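
    The Python sketch below illustrates, purely schematically, how heterogeneous data blocks might be weighted, combined and hierarchically clustered; the block sizes, weights, distance metric and linkage method are assumptions, not those used in the study.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Schematic polyphasic clustering: standardise each data block, weight it,
# concatenate, then cluster hierarchically. All sizes and weights are
# illustrative assumptions for 12 hypothetical strains.

rng = np.random.default_rng(0)
carbon = rng.random((12, 20))             # carbon source utilisation profiles
ribotype = rng.random((12, 15))           # riboprint banding patterns
ftir = rng.random((12, 200))              # FT-IR spectral features

def zscore(block):
    return (block - block.mean(axis=0)) / (block.std(axis=0) + 1e-9)

combined = np.hstack([1.0 * zscore(carbon),
                      1.0 * zscore(ribotype),
                      0.5 * zscore(ftir)])  # down-weight the large spectral block

Z = linkage(pdist(combined), method='average')
clusters = fcluster(Z, t=3, criterion='maxclust')  # polyphasic grouping of the strains
```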

    Radiation Hydrodynamical Instabilities in Cosmological and Galactic Ionization Fronts

    Ionization fronts, the sharp radiation fronts behind which H/He ionizing photons from massive stars and galaxies propagate through space, were ubiquitous in the universe from its earliest times. The cosmic dark ages ended with the formation of the first primeval stars and galaxies a few hundred Myr after the Big Bang. Numerical simulations suggest that stars in this era were very massive, 25 - 500 solar masses, with H II regions of up to 30,000 light-years in diameter. We present three-dimensional radiation hydrodynamical calculations revealing that the I-fronts of the first stars and galaxies were prone to violent instabilities, enhancing the escape of UV photons into the early intergalactic medium (IGM) and forming clumpy media in which supernovae later exploded. The enrichment of such clumps with metals by the first supernovae may have led to the prompt formation of a second generation of low-mass stars, profoundly transforming the nature of the first protogalaxies. Cosmological radiation hydrodynamics is unique because ionizing photons coupled strongly to both gas flows and primordial chemistry at early epochs, introducing a hierarchy of disparate characteristic timescales whose relative magnitudes can vary greatly throughout a given calculation. We describe the adaptive multistep integration scheme we have developed for the self-consistent transport of both cosmological and galactic ionization fronts. Comment: 6 pages, 4 figures, accepted for the proceedings of HEDLA2010, Caltech, March 15 - 18, 2010
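
    A schematic Python sketch of the subcycling idea behind such an adaptive multistep scheme is shown below; the callables and the safety factor are hypothetical, and the sketch is not the authors' integrator.

```python
# Sketch of the adaptive multistep (subcycling) idea: the stiff chemistry and
# ionization updates are advanced with many small steps inside one larger
# hydrodynamic step, each substep set by the shortest local timescale.
# All callables and the safety factor are illustrative assumptions.

def advance(state, dt_hydro, hydro_step, chem_step, chem_timescale, safety=0.1):
    t = 0.0
    while t < dt_hydro:
        dt = min(safety * chem_timescale(state), dt_hydro - t)
        state = chem_step(state, dt)       # subcycle chemistry/ionization
        t += dt
    return hydro_step(state, dt_hydro)     # then take the full hydrodynamic step
```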

    A Comprehensive Analysis of Choroideremia: From Genetic Characterization to Clinical Practice.

    Choroideremia (CHM) is a rare X-linked disease leading to progressive retinal degeneration that results in blindness. The disorder is caused by mutations in the CHM gene encoding the REP-1 protein, an essential component of the Rab geranylgeranyltransferase (GGTase) complex. In the present study, we evaluated a multi-technique analysis algorithm to describe the mutational spectrum identified in a large cohort of cases and further correlate CHM variants with the phenotypic characteristics and biochemical defects of choroideremia patients. Molecular genetic testing led to the characterization of 36 out of 45 unrelated CHM families (80%), allowing the clinical reclassification of four CHM families. Haplotype reconstruction showed independent origins for the recurrent p.Arg293* and p.Lys178Argfs*5 mutations, suggesting the presence of hotspots in CHM, as well as the identification of two different unrelated events involving exon 9 deletion. No clear genotype-phenotype correlation could be established. Furthermore, all the patients' fibroblasts analyzed presented significantly increased levels of unprenylated Rab proteins compared to control cells; however, this was not related to the genotype. This research demonstrates the major potential of the proposed algorithm for diagnosis. Our data underline the importance of establishing a differential diagnosis with other retinal dystrophies, supporting the idea of an underestimated prevalence of choroideremia. Moreover, they suggest that the severity of the disorder cannot be exclusively explained by the genotype.

    The Formation of Cosmic Structures in a Light Gravitino Dominated Universe

    We analyse the formation of cosmic structures in models where the dark matter is dominated by light gravitinos with mass in the range 100 eV - 1 keV, as predicted by gauge-mediated supersymmetry (SUSY) breaking models. After evaluating the number of degrees of freedom at gravitino decoupling (g_*), we compute the transfer function for matter fluctuations and show that gravitinos behave like warm dark matter (WDM) with a free-streaming scale comparable to the galaxy mass scale. We consider different low-density variants of the WDM model, both with and without a cosmological constant, and compare the predictions for the abundance of neutral hydrogen within high-redshift damped Lyman-α systems and for the number density of local galaxy clusters with the corresponding observational constraints. We find that none of the models satisfies both constraints at the same time, unless a rather small Ω_0 value (≲ 0.4) and a rather large Hubble parameter (≳ 0.9) are assumed. Furthermore, in a model with warm + hot dark matter, with the hot component provided by massive neutrinos, the strong suppression of fluctuations on scales of ~1 h⁻¹ Mpc precludes the formation of high-redshift objects when the low-z cluster abundance is required. We conclude that all the different variants of a light-gravitino-dominated dark matter model face serious difficulties as far as cosmic structure formation is concerned. This gives a severe cosmological constraint on the gauge-mediated SUSY breaking scheme. Comment: 28 pages, LaTeX, submitted for publication to Phys.Rev.

    Non-invasive ventilation in obesity hypoventilation syndrome without severe obstructive sleep apnoea

    Background Non-invasive ventilation (NIV) is an effective form of treatment in patients with obesity hypoventilation syndrome (OHS) who have concomitant severe obstructive sleep apnoea (OSA). However, there is a paucity of evidence on the efficacy of NIV in patients with OHS without severe OSA. We performed a multicentre randomised clinical trial to determine the comparative efficacy of NIV versus lifestyle modification (control group), using daytime arterial carbon dioxide tension (PaCO2) as the main outcome measure. Methods Between May 2009 and December 2014 we sequentially screened patients with OHS without severe OSA. Participants were randomised to NIV versus lifestyle modification and were followed for 2 months. Arterial blood gas parameters, clinical symptoms, health-related quality of life assessments, polysomnography, spirometry, 6-min walk distance test, blood pressure measurements and healthcare resource utilisation were evaluated. Statistical analysis was performed on an intention-to-treat basis. Results A total of 365 patients were screened, of whom 58 were excluded. Severe OSA was present in 221, and the remaining 86 patients without severe OSA were randomised. NIV led to a significantly larger improvement in PaCO2, of -6 (95% CI -7.7 to -4.2) mm Hg versus -2.8 (95% CI -4.3 to -1.3) mm Hg (p<0.001), and in serum bicarbonate, of -3.4 (95% CI -4.5 to -2.3) versus -1 (95% CI -1.7 to -0.2) mmol/L (p<0.001). Adjusting the PaCO2 change for NIV compliance did not further improve the inter-group statistical significance. Sleepiness, some health-related quality of life assessments and polysomnographic parameters improved significantly more with NIV than with lifestyle modification. Additionally, there was a tendency towards lower healthcare resource utilisation in the NIV group. Conclusions NIV is more effective than lifestyle modification in improving daytime PaCO2, sleepiness and polysomnographic parameters. Long-term prospective studies are necessary to determine whether NIV reduces healthcare resource utilisation, cardiovascular events and mortality.

    Sensitization of cervix cancer cells to Adriamycin by Pentoxifylline induces an increase in apoptosis and decrease senescence

    Background Chemotherapeutic drugs like Adriamycin (ADR) induce apoptosis or senescence in cancer cells, but these cells often develop resistance and generate responses of short duration or complete failure. The methylxanthine drug Pentoxifylline (PTX), used routinely in the clinical setting for circulatory diseases, has recently been described to have antitumor properties. We evaluated whether pretreatment with PTX modifies apoptosis and senescence induced by ADR in cervix cancer cells. Methods HeLa (HPV 18+) and SiHa (HPV 16+) cervix cancer cells and non-tumorigenic immortalized HaCaT cells (control) were treated with PTX, ADR or PTX + ADR. The cellular toxicity of PTX and the survival fraction were determined by WST-1 and clonogenic assay, respectively. Apoptosis, caspase activation and ADR efflux rate were measured by flow cytometry, and senescence by microscopy. IκBα and DNA fragmentation were determined by ELISA. Proapoptotic, antiapoptotic and senescence genes, as well as HPV-E6/E7 mRNA expression, were detected by real-time RT-PCR. p53 protein levels were assayed by Western blot. Results PTX is toxic (WST-1), affects survival (clonogenic assay) and induces apoptosis in cervix cancer cells. Additionally, the combination of this drug with ADR diminished the survival fraction and significantly increased apoptosis of HeLa and SiHa cervix cancer cells. Treatments were less effective in HaCaT cells. We found caspase participation in the induction of apoptosis by PTX, ADR or their combination. Surprisingly, in spite of the antitumor activity displayed by PTX, our results indicate that this methylxanthine, per se, does not induce senescence; however, it inhibits senescence induced by ADR and at the same time increases apoptosis. PTX elevates IκBα levels. Such sensitization is achieved through the up-regulation of proapoptotic factors such as caspase and bcl family gene expression. PTX and PTX + ADR also decrease E6 and E7 expression in SiHa cells, but not in HeLa cells. p53 was detected only in SiHa cells treated with ADR. Conclusion PTX is a good inducer of apoptosis but does not induce senescence. Furthermore, PTX reduced the ADR-induced senescence and increased apoptosis in cervix cancer cells.